Sparse Recovery by Non-Convex Optimization
Abstract
In this note, we address the theoretical properties of ∆p, a class of compressed sensing decoders that rely on ℓp minimization with p ∈ (0, 1) to recover estimates of sparse and compressible signals from incomplete and inaccurate measurements. In particular, we extend the results of Candès, Romberg and Tao [3] and Wojtaszczyk [30] regarding the decoder ∆1, based on ℓ1 minimization, to ∆p with p ∈ (0, 1). Our results are twofold. First, we show that under sufficient conditions that are weaker than the analogous conditions for ∆1, the decoders ∆p are robust to noise and stable in the sense that they are (2, p) instance optimal. Second, we extend the results of Wojtaszczyk to show that, like ∆1, the decoders ∆p are (2, 2) instance optimal in probability provided the measurement matrix is drawn from an appropriate distribution. While the extension of the results of [3] to the setting where p ∈ (0, 1) is straightforward, the extension of the instance optimality in probability result of [30] is non-trivial. In particular, we need to prove that the LQ1 property, introduced in [30] and shown to hold for Gaussian matrices and matrices whose columns are drawn uniformly from the sphere, generalizes to an LQp property for the same classes of matrices. Our proof is based on a result by Gordon and Kalton [18] about the Banach–Mazur distances of p-convex bodies to their convex hulls.
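The decoder ∆p is defined by an ℓp-minimization program, not by an algorithm; exact ℓp minimization with p ∈ (0, 1) is non-convex and intractable in general. A minimal sketch of one popular heuristic for approximating such minimizers, iteratively reweighted least squares (IRLS) with an ε-smoothing schedule, is given below. The function name, the annealing schedule, and all parameter values are illustrative assumptions, not the method analyzed in this note.

```python
import numpy as np

def irls_lp(A, y, p=0.5, iters=100, eps0=1.0):
    """Heuristic for min ||x||_p^p subject to Ax = y with 0 < p < 1,
    via iteratively reweighted least squares with epsilon smoothing.
    This approximates the decoder Delta_p; it is not the exact
    minimizer whose theoretical properties the note studies."""
    m, n = A.shape
    x = np.linalg.pinv(A) @ y              # minimum-norm starting point
    eps = eps0
    for it in range(iters):
        # smoothed weights: w_i = (x_i^2 + eps)^(p/2 - 1)
        w = (x**2 + eps) ** (p / 2.0 - 1.0)
        # weighted minimum-norm solution of Ax = y:
        # x = D A^T (A D A^T)^{-1} y with D = diag(1 / w)
        D = np.diag(1.0 / w)
        x = D @ A.T @ np.linalg.solve(A @ D @ A.T, y)
        if (it + 1) % 5 == 0:              # anneal the smoothing parameter
            eps = max(eps / 10.0, 1e-12)
    return x
```

For a Gaussian measurement matrix with enough rows relative to the sparsity level, this heuristic typically recovers exactly sparse signals from noiseless measurements y = Ax₀.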
Related Papers
Convex Block-sparse Linear Regression with Expanders - Provably
Sparse matrices are favorable objects in machine learning and optimization. When such matrices are used, in place of dense ones, the overall complexity requirements in optimization can be significantly reduced in practice, both in terms of space and run-time. Prompted by this observation, we study a convex optimization scheme for block-sparse recovery from linear measurements. To obtain linear ...
A Non-convex Approach for Sparse Recovery with Convergence Guarantee
In the area of sparse recovery, numerous studies suggest that non-convex penalties may induce better sparsity than convex ones, but to date non-convex algorithms have lacked a convergence guarantee from the initial solution to the global optimum. This paper aims to provide a theoretical guarantee for sparse recovery via non-convex optimization. The concept of weak convexity is incorporated into...
On recovery of block-sparse signals via mixed l2/lq (0 < q ≤ 1) norm minimization
Compressed sensing (CS) states that a sparse signal can be exactly recovered from very few linear measurements. In many applications, however, real-world signals exhibit additional structure beyond standard sparsity. The typical example is the so-called block-sparse signal, whose non-zero coefficients occur in a few blocks. In this article, we investigate the mixed l2/lq (0 < q ≤ 1) norm ...
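For intuition on the mixed l2/lq penalty at its convex endpoint q = 1, the associated shrinkage (proximal) step is block soft-thresholding: each block is shrunk toward zero as a unit. The sketch below is a generic illustration of that operator, not the algorithm of this paper; the function name and block encoding are assumptions.

```python
import numpy as np

def block_soft_threshold(x, blocks, tau):
    """Proximal operator of tau * sum_b ||x[b]||_2, the q = 1
    (group-lasso) endpoint of the mixed l2/lq penalty family:
    each block is scaled toward zero, and any block whose l2
    norm falls below tau is zeroed out entirely."""
    out = np.zeros_like(x)
    for b in blocks:
        nrm = np.linalg.norm(x[b])
        if nrm > tau:
            out[b] = (1.0 - tau / nrm) * x[b]
    return out
```

Taking q < 1, as in the paper, sharpens this shrinkage into a non-convex thresholding rule that suppresses small blocks more aggressively.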
On Accelerated Hard Thresholding Methods for Sparse Approximation
We propose and analyze acceleration schemes for hard thresholding methods with applications to sparse approximation in linear inverse systems. Our acceleration schemes fuse combinatorial, sparse projection algorithms with convex optimization algebra to provide computationally efficient and robust sparse recovery methods. We compare and contrast the (dis)advantages of the proposed schemes with t...
Confidence-constrained joint sparsity recovery under the Poisson noise model
Our work is focused on the joint sparsity recovery problem where the common sparsity pattern is corrupted by Poisson noise. We formulate the confidence-constrained optimization problem in both least squares (LS) and maximum likelihood (ML) frameworks and study the conditions for perfect reconstruction of the original row sparsity and row sparsity pattern. However, the confidence-constrained opt...
Sparse Signal Recovery from Quadratic Measurements via Convex Programming
In this paper we consider a system of quadratic equations |〈zj, x〉|² = bj, j = 1, ..., m, where x ∈ R^n is unknown while the normal random vectors zj ∈ R^n and quadratic measurements bj ∈ R are known. The system is assumed to be underdetermined, i.e., m < n. We prove that if there exists a sparse solution x, i.e., at most k components of x are non-zero, then by solving a convex optimization program, we ...